Chest X-Rays (CXRs) are the most common diagnostic imaging tests in medicine. Although many show no abnormalities, they also contain information that currently goes unused, including posture, heart size, and spinal shape, which may inform the risk of future disease. Our lab has shown that new artificial intelligence techniques (deep learning) can extract this prognostic information from CXRs to aid in disease prediction and prevention. Leveraging large clinical trial databases that include CXR images, our lab has developed deep learning models to estimate “chest x-ray age” (CXR-Age): a number, in years, that captures how healthy the chest x-ray appears overall. CXR-Age represents an individual’s biological age, which aims to measure the effects of aging on that individual’s health. Biological age may be useful in multiple risk calculators, such as those estimating a patient’s risk of cardiovascular disease. Many of these nationally used calculators rely on chronological age, which does not fully capture the effects of aging on an individual. Using CXR-Age instead should improve these calculators’ ability to predict cardiovascular events.
CXR-Age is a convolutional neural network (CNN) developed using CXR data from over 100,000 individuals from publicly available cohorts. A CNN takes an input image, which the computer views as a matrix of numbers. Filters pass over this matrix to pick up specific relationships between the values representing the pixels of the image; for example, a filter can detect edges, vertical lines, or other simple patterns. The result of passing a filter over the image is called a feature map, which shows where the features detected by that filter appear in the original image. This first set of feature maps forms a convolutional layer. Additional filters can then be applied to these feature maps to create another layer, and another, resulting in multiple connected layers that together form a network.
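As a rough illustration of this idea, the sketch below (written in PyTorch, and not the actual CXR-Age architecture, whose layer sizes and filters are assumptions here) shows an image entering a small network as a matrix of numbers, passing through two stacked convolutional layers whose filters produce successive sets of feature maps, and ending in a single numeric output such as a predicted age in years.

```python
# Minimal sketch of stacked convolutional layers, NOT the actual CXR-Age model.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Each Conv2d applies a bank of learned filters that slide over its input;
        # the output channels are the resulting feature maps.
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)              # downsample the feature maps
        self.head = nn.Linear(16 * 56 * 56, 1)   # single output, e.g. an age in years

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))  # first layer: 8 feature maps
        x = self.pool(torch.relu(self.conv2(x)))  # second layer builds on those maps: 16 feature maps
        return self.head(x.flatten(start_dim=1))

# A grayscale image resized to 224x224 is just a matrix of pixel values.
image = torch.rand(1, 1, 224, 224)
prediction = TinyCNN()(image)
print(prediction.shape)  # torch.Size([1, 1])
```

Real models like CXR-Age use many more such layers and filters, but the principle is the same: each layer's feature maps become the input matrices for the next layer's filters.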